    Femoral Strength Prediction using Finite Element Models: Validation of models based on CT and reconstructed DXA images against full-field strain measurements

    Osteoporosis is characterized by low bone density and results in a markedly increased risk of skeletal fractures. It has been estimated that about 40% of all women over 50 years of age will suffer an osteoporotic fracture leading to hospitalization. Current osteoporosis diagnostics are largely based on statistical tools, using epidemiological parameters and bone mineral density (BMD) measured with dual-energy X-ray absorptiometry (DXA). However, DXA-based BMD has proved to be only a moderate predictor of bone strength. Novel methods are therefore advocated that take into account all mechanical characteristics of the bone and their influence on its resistance to fracture. Finite element (FE) models may improve bone strength prediction accuracy, since they can account for the structural determinants of bone strength and the variety of external loads acting on the bones during daily life. Several studies have shown that FE models can outperform BMD as a predictor of bone strength. However, these FE models are built from Computed Tomography (CT) datasets, as the 3D bone geometry is required, and take several hours of work by an experienced engineer. Moreover, the radiation dose for the patient is higher for a CT scan than for a DXA scan. All these factors have contributed to the low impact that FE-based methods have had on clinical practice so far. This thesis aimed at developing accurate and thoroughly validated FE models to enable a more accurate prediction of femoral strength. An accurate estimate of femoral strength could serve as one of the main determinants of a patient's fracture risk during population screening. In the first part of the thesis, the ex vivo mechanical tests performed on cadaver human femurs are presented. Digital image correlation (DIC), an optical method that allows full-field measurement of the displacements over the femur surface, was used to retrieve strains during the tests. A subject-specific FE modelling technique able to predict the deformation state and the overall strength of human femurs is then presented. The FE models were based on clinical images from 3D CT datasets and were validated against the measurements collected during the ex vivo mechanical tests. Both the experimental setup with DIC and the FE modelling procedure were initially tested using composite bones (only the FE part of the composite bone study is presented in this thesis). The method was then extended to human cadaver bones. Once validated against experimental strain measurements, the FE modelling procedure could be used to predict bone strength. In the last part of the thesis, the predictive ability of FE models based on the shape and BMD distribution reconstructed from a single DXA image using a statistical shape and appearance model (SSAM, developed outside this thesis) was assessed. The predictions were compared to the experimental measurements, and the resulting accuracy was compared to that of the CT-based FE models. The results were encouraging. The CT-based FE models predicted the deformation state with very good accuracy when compared to thousands of full-field measurements from DIC (normalized root mean square error, NRMSE, below 11%) and, most importantly, predicted femoral strength with an error below 2%.
The performance of the SSAM-based FE models was also promising, showing only a slight reduction compared to the CT-based approach (NRMSE below 20% for the strain prediction, average strength prediction error of 12%), with the significant advantage that the models are built from a single conventional DXA image. In conclusion, the concept of a new, accurate and semi-automatic FE modelling procedure aimed at predicting fracture risk in individuals was developed. The performances of CT-based and SSAM-based models were thoroughly compared, and the results support the future translation of SSAM-based FE models built from a single DXA image into the clinic. The developed tool could therefore allow mechanistic information to be included in fracture-risk screening, which may ultimately improve the identification of subjects at risk.
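
    As a side note on the error metric quoted above, the sketch below shows a minimal way an NRMSE between FE-predicted and DIC-measured strain fields can be computed; the arrays and the normalization by the measured range are illustrative assumptions (other normalization conventions exist).

```python
import numpy as np

def nrmse(predicted, measured):
    """Root mean square error between two strain fields,
    normalized by the range of the measured values."""
    rmse = np.sqrt(np.mean((predicted - measured) ** 2))
    return rmse / (measured.max() - measured.min())

# Hypothetical example: FE-predicted vs. DIC-measured surface strains
rng = np.random.default_rng(0)
measured = rng.normal(0.0, 500e-6, size=10_000)            # ~10k DIC points
predicted = measured + rng.normal(0.0, 30e-6, size=10_000)  # small model error
print(f"NRMSE = {100 * nrmse(predicted, measured):.1f}%")
```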

    Extracting accurate strain measurements in bone mechanics: A critical review of current methods

    Osteoporosis-related fractures are a social burden that calls for more accurate fracture prediction methods. Mechanistic methods, e.g. finite element models, have been proposed as tools to better predict bone mechanical behaviour and strength. However, there is little consensus about the optimal constitutive law to describe bone as a material. Extracting reliable and relevant strain data from experimental tests is of fundamental importance to better understand bone mechanical properties and to validate numerical models. Several techniques have been used to measure strain in experimental mechanics, with substantial differences in terms of accuracy, precision, and time- and length-scales. Each technique has upsides and downsides that must be carefully weighed when designing an experiment. Moreover, additional complexities are often encountered when applying such strain measurement techniques to bone, owing to its complex composite structure. This literature review examines the four most commonly adopted methods for strain measurement (strain gauges, fibre Bragg grating sensors, digital image correlation, and digital volume correlation), with a focus on studies using bone as the substrate material at the organ and tissue levels. For each method, the working principles, a summary of the main applications to bone mechanics at the organ and tissue level, and a list of pros and cons are provided.

    Consensus based optimization with memory effects: random selection and applications

    In this work we extend the class of Consensus-Based Optimization (CBO) metaheuristics by considering memory effects and a random selection strategy. The proposed algorithm iteratively updates a population of particles according to a consensus dynamics inspired by social interactions among individuals. The consensus point is computed taking into account the past positions of all particles. While sharing features with the popular Particle Swarm Optimization (PSO) method, the exploratory behaviour is fundamentally different and allows better control over the convergence of the particle system. We discuss implementation aspects that increase efficiency while preserving the success rate of the optimization process. In particular, we show how employing a random selection strategy to discard particles during the computation improves overall performance. Several benchmark problems and applications to image segmentation and neural network training are used to validate and test the proposed method. A theoretical analysis allows us to recover convergence guarantees under mild assumptions on the objective function. This is done by first approximating the particle evolution with a continuous-in-time dynamics, and then by taking the mean-field limit of that dynamics. Convergence to a global minimizer is finally proved at the mean-field level.
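
    For illustration only, the sketch below shows one plausible reading of a CBO iteration with memory (the consensus point is a Gibbs-weighted average of each particle's best past position) and random discarding of particles; all parameter values, the noise model, and the discarding schedule are assumptions, not the paper's exact scheme.

```python
import numpy as np

def cbo_memory(f, dim, n=100, steps=500, dt=0.1, lam=1.0, sigma=0.8,
               alpha=50.0, discard_every=100, discard_frac=0.1, rng=None):
    """Toy CBO with memory effects and random particle discarding.
    f: objective R^dim -> R. Memory is each particle's best past
    position; the consensus point is their Gibbs-weighted average."""
    rng = rng or np.random.default_rng(0)
    x = rng.uniform(-3, 3, size=(n, dim))   # current positions
    y = x.copy()                            # best past positions (memory)
    fy = np.apply_along_axis(f, 1, y)
    for t in range(steps):
        # Consensus point: soft-min weighting over remembered positions
        w = np.exp(-alpha * (fy - fy.min()))
        v = (w[:, None] * y).sum(axis=0) / w.sum()
        # Drift toward consensus plus component-wise exploration noise
        d = v - x
        x = x + lam * dt * d + sigma * np.sqrt(dt) * d * rng.standard_normal(x.shape)
        # Update memory wherever the new position improves on the best
        fx = np.apply_along_axis(f, 1, x)
        better = fx < fy
        y[better], fy[better] = x[better], fx[better]
        # Random selection: occasionally discard a random subset of particles
        if (t + 1) % discard_every == 0 and len(x) > 10:
            keep = rng.choice(len(x), size=int(len(x) * (1 - discard_frac)),
                              replace=False)
            x, y, fy = x[keep], y[keep], fy[keep]
    return v

# Hypothetical usage on a shifted sphere function (minimum at [1, ..., 1])
print(cbo_memory(lambda z: np.sum((z - 1.0) ** 2), dim=5))
```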

    No Fermionic Wigs for BPS Attractors in 5 Dimensions

    We analyze the fermionic wigging of 1/2-BPS (electric) extremal black hole attractors in N=2, D=5 ungauged Maxwell-Einstein supergravity theories, by exploiting anti-Killing spinor supersymmetry transformations. Regardless of the specific data of the real special geometry of the manifold defining the scalars of the vector multiplets, and differently from the D=4 case, we find that there are no corrections to the near-horizon attractor values of the scalar fields; an analogous result also holds for 1/2-BPS (magnetic) extremal black strings. Thus, the attractor mechanism receives no fermionic corrections in D=5 (at least in the BPS sector).

    Mixture Differential Cryptanalysis and Structural Truncated Differential Attacks on round-reduced AES

    At Eurocrypt 2017 the first secret-key distinguisher for 5-round AES -- based on the “multiple-of-8” property -- was presented. Although it allows one to distinguish a random permutation from an AES-like one, it seems rather hard to mount a key-recovery attack other than brute force using such a distinguisher. In this paper we introduce “Mixture Differential Cryptanalysis” of round-reduced AES-like ciphers, a way to translate the (complex) “multiple-of-8” 5-round distinguisher into a simpler and more convenient one (albeit on a smaller number of rounds). Given a pair of chosen plaintexts, the idea is to construct new pairs of plaintexts by mixing the generating variables of the original pair. We prove that, for 4-round AES, the ciphertexts of the original pair of plaintexts lie in a particular subspace if and only if the ciphertexts of the new pairs of plaintexts have the same property. This secret-key distinguisher -- which is independent of the secret key, of the details of the S-Box, and of the MixColumns matrix (except for the branch number being equal to 5) -- can be used as a starting point for new key-recovery attacks on round-reduced AES. Besides a theoretical explanation, we also provide a practical verification of both the distinguisher and the attack. As a second contribution, we show how to combine this new 4-round distinguisher with a modified version of a truncated differential distinguisher in order to set up new 5-round distinguishers that exploit properties independent of the secret key, of the details of the S-Box, and of the MixColumns matrix. Whereas a “classical” truncated differential distinguisher exploits the probability that a pair of texts satisfies a given differential trail independently of the other pairs, our distinguishers work with sets of N >> 1 (related) pairs of texts. In particular, our new 5-round AES distinguishers exploit the fact that such sets of texts satisfy some properties with a probability different from that of a random permutation. Even though these 5-round distinguishers have higher complexity than e.g. the “multiple-of-8” one in the literature, one of them can be used as the starting point for the first key-recovery attack on 6-round AES that directly exploits a 5-round secret-key distinguisher. The goal of this paper is indeed to present and explore new approaches, showing that even a distinguisher like the one presented at Eurocrypt -- believed to be hard to exploit -- can be used to set up a key-recovery attack.
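
    As a toy illustration of the mixing step described above, the sketch below builds new plaintext pairs by swapping subsets of the bytes of one diagonal (standing in for the generating variables) between two plaintexts that agree everywhere else; the paper's actual construction over cosets and all diagonals is more general.

```python
import numpy as np

DIAGONAL = [(0, 0), (1, 1), (2, 2), (3, 3)]  # one AES state diagonal

def mixture_pairs(p1, p2):
    """Illustrative 'mixing': p1 and p2 are 4x4 byte arrays agreeing
    outside the diagonal. New pairs are built by swapping a proper,
    non-empty subset of the diagonal cells between the two texts."""
    pairs = []
    for mask in range(1, 2 ** len(DIAGONAL) - 1):  # proper non-trivial subsets
        a, b = p1.copy(), p2.copy()
        for i, (r, c) in enumerate(DIAGONAL):
            if mask >> i & 1:
                a[r, c], b[r, c] = p2[r, c], p1[r, c]
        pairs.append((a, b))
    return pairs

rng = np.random.default_rng(0)
p1 = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)
p2 = p1.copy()
for r, c in DIAGONAL:                 # the pair differs only on the diagonal
    p2[r, c] = rng.integers(0, 256, dtype=np.uint8)
# 14 swaps; complementary masks yield the same unordered pair, so 7 distinct
print(len(mixture_pairs(p1, p2)), "mixture pairs")
```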

    MixColumns Properties and Attacks on (round-reduced) AES with a Single Secret S-Box

    In this paper, we present new key-recovery attacks on AES with a single secret S-Box. Several attacks for this model have been proposed in the literature, the most recent ones at Crypto’16 and FSE’17. Both of these attacks exploit a particular property of the MixColumns matrix to recover the secret key. In this work, we show that the same attacks work by exploiting a weaker property of the MixColumns matrix. As a first result, this (largely) increases the number of MixColumns matrices for which it is possible to set up all these attacks. As a second result, we present new attacks on 5-round AES with a single secret S-Box that exploit the multiple-of-n property recently proposed at Eurocrypt’17. This property is based on the fact that, for a suitably chosen set of plaintexts, the number of pairs of ciphertexts that lie in a particular subspace is a multiple of n.
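
    For intuition, the multiple-of-n property concerns a count of ciphertext pairs whose difference lies in a given subspace; a minimal sketch of such a counting test follows (the ciphertext list and byte positions are placeholders, not the paper's actual spaces).

```python
from itertools import combinations

def count_pairs_in_subspace(ciphertexts, inactive_bytes):
    """Count unordered pairs whose difference is zero on the given byte
    positions, i.e. whose difference lies in the corresponding subspace
    (a toy stand-in for the subspace test used in the attacks)."""
    count = 0
    for c1, c2 in combinations(ciphertexts, 2):
        if all(c1[i] == c2[i] for i in inactive_bytes):
            count += 1
    return count

# With the right 5-round AES plaintext structure, the property says a
# count like this is always a multiple of n; the data here is dummy.
example = [bytes([i] * 16) for i in range(8)]
print(count_pairs_in_subspace(example, inactive_bytes=[0, 5, 10, 15]))
```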

    Time-resolved reflectance spectroscopy as a management tool for late-maturing nectarine supply chain

    The absorption coefficient of the fruit flesh at 670 nm (μa), measured at harvest by time-resolved reflectance spectroscopy (TRS), is a good maturity index for early nectarine cultivars. A kinetic model has been developed linking μa, expressed as the biological shift factor, to softening during ripening. This allows shelf-life prediction for individual fruit from the value of μa at harvest and the categorization of fruit into predicted softening and usability classes. In this work, the predictive capacity of a kinetic model developed using μa data at harvest and firmness data within 1-2 d after harvest for a late-maturing nectarine cultivar ('Morsiani 90') was tested for prediction and classification ability. Compared to early-maturing cultivars, μa at harvest had low values and low variability, indicating advanced maturity, whereas firmness was similar. Hence, fruit were categorized into six usability classes (from 'transportable-hard' to 'ready-to-eat-very soft') based on μa limits established by analyzing firmness data during shelf life after harvest. The model was tested by comparing the predicted firmness and usability class to the actual ones measured during ripening, and its performance was compared to that of models based on data from the whole shelf life at 20 °C after harvest and after storage at 0 °C and 4 °C. The model showed a classification ability very close to that of the models based on data from the whole shelf life, and was able to correctly segregate the 'ready-to-eat-transportable', 'transportable' and 'transportable-hard' classes for ripening at harvest and after storage at 0 °C, and the 'ready-to-eat-very soft' and 'ready-to-eat-soft' classes for ripening after storage at 4 °C, with lower performance for fruit stored at 4 °C with respect to the other two ripening conditions.
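
    Purely as an illustration of the classification step, the sketch below maps a μa value at harvest to a usability class via thresholds; the numerical limits here are hypothetical, since the real class limits were fitted to firmness data in shelf life.

```python
# Hypothetical μa class limits (cm^-1); the real limits were established
# by analyzing firmness data during shelf life and are not reproduced here.
CLASSES = [
    (0.05, "ready-to-eat-very soft"),
    (0.08, "ready-to-eat-soft"),
    (0.11, "ready-to-eat"),
    (0.14, "ready-to-eat-transportable"),
    (0.18, "transportable"),
]

def usability_class(mu_a):
    """Map a fruit's absorption coefficient at harvest to one of six
    usability classes (lower μa = more mature = softens sooner)."""
    for limit, label in CLASSES:
        if mu_a < limit:
            return label
    return "transportable-hard"

print(usability_class(0.06))   # -> "ready-to-eat-soft"
```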

    MiMC: Efficient Encryption and Cryptographic Hashing with Minimal Multiplicative Complexity

    We explore cryptographic primitives with low multiplicative complexity. This is motivated by recent progress in practical applications of secure multi-party computation (MPC), fully homomorphic encryption (FHE), and zero-knowledge proofs (ZK), where primitives from symmetric cryptography are needed and where linear computations are, compared to non-linear operations, essentially “free”. Starting with the cipher design strategy “LowMC” from Eurocrypt 2015, a number of bit-oriented proposals have been put forward, focusing on applications where the multiplicative depth of the circuit describing the cipher is the most important optimization goal. Surprisingly, although many MPC/FHE/ZK protocols natively support operations in GF(p) for large p, very few primitives, even considering all of symmetric cryptography, natively work in such fields. To that end, our proposal for both block ciphers and cryptographic hash functions is to reconsider and simplify the round function of the Knudsen-Nyberg cipher from 1995. The mapping F(x) := x^3 is used as the main component there, and is also the main component of our family of proposals called “MiMC”. We study various attack vectors for this construction and give a new attack vector that outperforms the others in relevant settings. Due to its very low number of multiplications, the design lends itself well to a large class of new applications, especially when depth does not matter but the total number of multiplications in the circuit dominates all aspects of the implementation. With a number of rounds which we deem secure based on our security analysis, we report significant performance improvements in a representative use case involving SNARKs.
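
    The construction described above lends itself to a compact sketch: each round adds the key and a round constant and applies the cube map. The parameters below (a small prime with gcd(3, p-1) = 1, arbitrary round constants) are toy stand-ins for the real instantiations, which use large fields and about log_3(p) rounds.

```python
import math

P = 101                                  # toy prime with gcd(3, P-1) = 1,
                                         # so x -> x^3 is a permutation of GF(P)
ROUNDS = math.ceil(math.log(P, 3))       # ~log_3 p rounds, as in the design
CONSTANTS = [0] + [pow(7, i, P) for i in range(1, ROUNDS)]  # c_0 = 0, rest arbitrary

def mimc_encrypt(x, k):
    """MiMC-style permutation: r rounds of x -> (x + k + c_i)^3 mod p,
    followed by a final key addition."""
    for c in CONSTANTS:
        x = pow((x + k + c) % P, 3, P)   # the cube map F(x) = x^3
    return (x + k) % P

def mimc_decrypt(y, k):
    """Inverse: undo the key addition, then take cube roots round by round."""
    s = pow(3, -1, P - 1)                # cube-root exponent: 3^-1 mod (p-1)
    y = (y - k) % P
    for c in reversed(CONSTANTS):
        y = (pow(y, s, P) - k - c) % P
    return y

assert mimc_decrypt(mimc_encrypt(42, 17), 17) == 42
```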